Do machines actually beat doctors?
If you ask academic machine learning experts about the things that annoy them, high on the list will be overblown headlines about how machines are beating humans at some task where that is completely untrue. This is partly because reality is already so damn amazing there is no need for hyperbole. Most of Atari is solved. Professional transcriptionists lose to voice recognition systems. Object recognition has been counted on the machine side of the tally for years (albeit with a few more reservations). Considering the headlines we see, this may surprise many people. For someone who watches the medical AI space, it seems like a day can't go by without some new article in which journalists report that machines are outperforming human doctors. I'm sure anyone who stumbles on this blog has seen many of them. I didn't even have to search for these; almost all of them are still at the top of my Twitter feed.
Google's AI system can beat doctors at detecting breast cancer
London (CNN Business) Google (GOOGL) says it has developed an artificial intelligence system that can detect the presence of breast cancer more accurately than doctors. A study that tested the accuracy of the system, which was developed through a collaboration between the tech giant and cancer researchers, was published Wednesday in the scientific journal Nature. The program was trained to detect cancer using tens of thousands of mammograms from women in the United Kingdom and the United States, and early research shows it can produce more accurate detection than human radiologists. According to the study, using the AI technology resulted in fewer false positives, where test results suggest cancer is present when it isn't, and false negatives, where an existing cancer goes undetected. Compared to human experts, the program reduced false positives by 5.7% for US subjects and 1.2% for UK subjects.
Google's DeepMind A.I. beats doctors in breast cancer screening trial
Artificial Intelligence (AI) powered by Google's DeepMind algorithm may be more accurate at spotting breast cancer than real life doctors. The findings, published in Nature.com on Wednesday, come after researchers from Imperial College London and Google Health "trained" a computer to spot abnormalities on X-ray images of nearly 29,000 women. Separate studies used imagery from U.K. and U.S. women and concluded that in both countries the computer reduced instances where a cancer was either incorrectly identified or incorrectly missed. In the United States, the improvement was more noticeable -- offering a reported reduction of 5.7% in false positives, where a mammogram is wrongly diagnosed as abnormal. There was also a reduction of 9.4% in false negatives, where a cancer is missed. "In an independent study of six radiologists, the AI system outperformed all of the human readers," claimed the report.
An AI startup that claimed it can beat doctors in an exam is putting $100 million into creating 500 new jobs
British medical startup Babylon Health will invest $100 million in hiring more than 500 researchers, scientists, and engineers over the next year to develop the use of artificial intelligence in healthcare. Chief executive Ali Parsa said this would bring Babylon's current team to more than 1,000 staff, and that the company would eventually move into new headquarters in London over the next 18 months to house the additional recruits. The firm is currently headquartered in Kensington, London, and plans to take over the rest of the offices in its building. The artificial intelligence research will build on software that Babylon showed off in late June. The company claimed at the time that its AI could assess common conditions more accurately than doctors.
AI System "BioMind" beats Doctors in China
An artificial intelligence (AI) system scored 2:0 against elite human physicians in two rounds of competitions in diagnosing brain tumors and predicting hematoma expansion in Beijing. The BioMind AI system made correct diagnoses in 87 percent of 225 cases in about 15 minutes, while a team of 15 senior doctors only achieved 66-percent accuracy.
AI Healthcare Firm Claims Its Chatbot Can Beat Doctors at Medical Exams
On Wednesday night, doctors at London's Royal College of Physicians were subjected to the world's first demonstration of an artificial intelligence (AI) robot performing a clinical test. The point of the event was to show how well the chatbot, engineered by digital healthcare startup Babylon Health, would perform at the MRCGP exam, the Royal College of General Practitioners' final test for trainee doctors. In the last five years, general practitioners have averaged a 72% score on the exam, declared Babylon Health director Dr. Mobasher Butt, before announcing his bot's score to the audience. "It got 82%," said the medical expert as people began to clap. "Tonight's results clearly illustrate how AI-augmented health services can reduce the burden on healthcare systems around the world. Our mission is to put accessible and affordable health services into the hands of every person on Earth," said Dr Ali Parsa, Babylon Health's founder and CEO, in a statement posted after the event.
Do machines actually beat doctors? ROC curves and performance metrics
Deep learning research in medicine is a bit like the Wild West at the moment; sometimes you find gold, sometimes a giant steampunk spider-bot causes a ruckus. I wanted to start closing out my series on whether AI will be replacing doctors soon. What has happened instead is that several papers have claimed to beat doctors while failing to justify those claims. Despite this, and despite not going through peer review, the groups involved have issued press releases about their achievements, marketing the results directly to the public and the media. This has derailed my series, as I have felt the need to focus a bit more on how to assess the quality of medical AI research. I don't think this is malicious. I think there is a cultural divide between the machine learning and medical communities: a different way of doing research, and a different level of evidence required for making strong claims. If I have to be honest, I think the machine learning community has a fair bit to learn from medical research in this regard. Last time I made a set of three rules about how to assess medical AI research, and the third was a glib recommendation to "actually read the paper".
[D] Do machines actually beat doctors? ROC curves and performance metrics • r/MachineLearning
One of the things I am trying to do this year is some more technical posts (following up on some issues I have noticed at the intersection between medicine and machine learning). This is the first in a little mini-series on performance testing. Medical research has a different way of doing things, being more cautious about making claims and a bit more rigorous in justifying them, both of which are useful ideas to apply more broadly in machine learning (particularly at the applied end). While performance testing is often considered basic knowledge, one of my supervisors/colleagues is a bit of a ROC expert so I hope I can pass on some new ways of looking at things that are interesting even for some of the more knowledgeable folks around here.
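To give a concrete flavour of the metrics this mini-series deals with, here is a minimal sketch of how sensitivity (true positive rate), specificity, and the points of an ROC curve fall out of a confusion matrix. The labels and scores below are made up purely for illustration; they are not from any of the studies discussed above.

```python
# Minimal sketch: sensitivity, specificity, and ROC points from
# binary labels and model scores. The data below is illustrative only.

def confusion_counts(labels, scores, threshold):
    """Count TP, FP, TN, FN when scores >= threshold are called positive."""
    tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= threshold)
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= threshold)
    tn = sum(1 for y, s in zip(labels, scores) if y == 0 and s < threshold)
    fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < threshold)
    return tp, fp, tn, fn

def roc_points(labels, scores):
    """Sweep every distinct score as a threshold; return (FPR, TPR) pairs.

    Each threshold gives one operating point: TPR is sensitivity,
    FPR is 1 - specificity. Plotting these pairs traces the ROC curve.
    """
    points = []
    for t in sorted(set(scores), reverse=True):
        tp, fp, tn, fn = confusion_counts(labels, scores, t)
        tpr = tp / (tp + fn)   # sensitivity (recall)
        fpr = fp / (fp + tn)   # 1 - specificity
        points.append((fpr, tpr))
    return points

labels = [1, 1, 0, 1, 0, 0]                 # 1 = disease present
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.2]     # model's predicted probabilities

for fpr, tpr in roc_points(labels, scores):
    print(f"FPR={fpr:.2f}  TPR={tpr:.2f}")
```

The key point this illustrates, and which later posts build on, is that a single model produces a whole curve of sensitivity/specificity trade-offs, so comparing a model's one reported operating point against a doctor's is not straightforward.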